
All Questions

1 vote · 1 answer · 45 views

RFECV and grid search - what sets to use for hyperparameter tuning?

I am running machine learning models (all with scikit-learn estimators, no neural networks) using a custom dataset with a number of features and a binary output. I first split the dataset into 0.6 (...
asked by Alex
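The usual way to combine the two is to put RFECV and the final estimator in one `Pipeline` and tune that pipeline with `GridSearchCV` on the training split only, so feature selection is re-fit inside every fold and never sees held-out data. A minimal sketch, assuming toy data from `make_classification` in place of the custom dataset:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline

# Toy binary-classification data standing in for the custom dataset.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=0)

# Feature selection and the classifier live in one pipeline, so RFECV
# is re-fit inside every grid-search fold (no leakage from the 0.4
# held-out split, which is reserved for the final evaluation).
pipe = Pipeline([
    ("select", RFECV(LogisticRegression(max_iter=1000), cv=3)),
    ("clf", LogisticRegression(max_iter=1000)),
])
search = GridSearchCV(pipe, {"clf__C": [0.1, 1.0]}, cv=3)
search.fit(X_train, y_train)  # tune on the training split only
print(search.best_params_)
```

The grid and the `C` values here are illustrative; the point is that `GridSearchCV` only ever sees the training split, and the test split is scored once at the end.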
0 votes · 0 answers · 271 views

Correct method to report Randomized Search CV results

I have searched online but I still cannot find a definitive answer on how to "correctly" report the results from hyperparameter tuning a machine learning model; though, this may just be some ...
asked by user167433
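A common convention is to report the winning configuration together with the mean and standard deviation of its cross-validated score, which `RandomizedSearchCV` stores in `cv_results_`. A minimal sketch, assuming toy data and a small `C` distribution for illustration:

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=200, random_state=0)
search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),
    {"C": loguniform(1e-3, 1e3)},
    n_iter=5, cv=5, random_state=0)
search.fit(X, y)

# Report the cross-validated mean and its spread for the chosen
# configuration, not just the single best number.
i = search.best_index_
mean = search.cv_results_["mean_test_score"][i]
std = search.cv_results_["std_test_score"][i]
print(f"best params: {search.best_params_}")
print(f"CV accuracy: {mean:.3f} +/- {std:.3f}")
```

Reporting the spread across folds makes it clear how stable the tuned score is, rather than presenting one optimistic point estimate.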
0 votes · 1 answer · 377 views

Why sign flip to indicate loss in hyperopt? [closed]

I am using hyperopt to find the best hyperparameters for a random forest. My objective is to get the parameters which return the best F1-score, as my dataset is ...
asked by The Great
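The sign flip exists because hyperopt's `fmin` always *minimizes* its objective, so a score you want to maximize (like F1) is returned negated. A minimal sketch of the idea, using a plain candidate loop in place of hyperopt so it runs with scikit-learn alone:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, random_state=0)

def objective(n_estimators):
    # The optimizer minimizes, so negate the score we want to maximize:
    # the smallest loss corresponds to the largest F1.
    f1 = cross_val_score(
        RandomForestClassifier(n_estimators=n_estimators, random_state=0),
        X, y, cv=3, scoring="f1").mean()
    return -f1

# fmin-style selection: pick the candidate with the smallest
# (most negative) loss.
candidates = [10, 50]
best = min(candidates, key=objective)
print(best)
```

With hyperopt itself, the body of `objective` would be identical; only the outer loop is replaced by `fmin` over a search space.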
0 votes · 2 answers · 974 views

GridSearch on imbalanced datasets

I'm trying to use grid search to find the best parameters for my model. Knowing that I have to apply the NearMiss undersampling method while doing cross-validation, should I fit my grid search on my ...
asked by Valentin
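The key constraint is that resampling must happen *inside* each CV fold, fit on the training folds only, which is what `imblearn`'s `Pipeline` does when passed to `GridSearchCV`. A sketch of the fold-wise idea using plain scikit-learn, with random undersampling standing in for NearMiss (an assumption made to keep the example self-contained):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

# Imbalanced toy data: roughly 90% class 0, 10% class 1.
X, y = make_classification(n_samples=300, weights=[0.9, 0.1],
                           random_state=0)
rng = np.random.default_rng(0)
scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=3).split(X, y):
    # Undersample the majority class on the training folds only;
    # the held-out fold keeps its original class balance.
    maj = train_idx[y[train_idx] == 0]
    mino = train_idx[y[train_idx] == 1]
    keep = rng.choice(maj, size=len(mino), replace=False)
    balanced = np.concatenate([keep, mino])
    clf = LogisticRegression(max_iter=1000).fit(X[balanced], y[balanced])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print(round(float(np.mean(scores)), 3))
```

An `imblearn.pipeline.Pipeline([("nearmiss", NearMiss()), ("clf", ...)])` handed to `GridSearchCV` automates exactly this per-fold behavior, so the grid search itself is fit on the original (un-resampled) training data.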
0 votes · 1 answer · 1k views

Does GridSearchCV not save the best parameters?

So I tuned the hyperparameters using GridSearchCV, fitted the model to the data, and then used best_params_. I'm just curious ...
asked by srp
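After `fit`, `GridSearchCV` does keep the winning combination on the search object, and (with the default `refit=True`) it also retrains a model on the full data with those parameters, exposed as `best_estimator_`. A minimal sketch on toy data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=0)
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      {"C": [0.1, 1.0]}, cv=3)  # refit=True by default
search.fit(X, y)

# The best parameters are saved on the search object after fit, and
# best_estimator_ is a model refit on all of (X, y) with exactly them.
print(search.best_params_)
print(search.best_estimator_.get_params()["C"] == search.best_params_["C"])
```

So there is no need to re-run the search to recover the parameters: `best_params_`, `best_score_`, and `best_estimator_` all persist on the fitted object.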
2 votes · 3 answers · 23k views

Hyper-parameter tuning of Naive Bayes Classifier

I'm fairly new to machine learning and I'm aware of the concept of hyper-parameter tuning of classifiers, and I've come across a couple of examples of this technique. However, I'm trying to use ...
asked by Sameer Zahid
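Naive Bayes models expose very few hyperparameters; for `GaussianNB` the usual one to tune is `var_smoothing`, and the standard grid-search machinery applies unchanged. A minimal sketch, assuming toy data and an illustrative log-spaced grid:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=200, random_state=0)

# GaussianNB has few knobs; var_smoothing (a stabilizing addition to
# the per-feature variances) is the one usually searched over.
grid = {"var_smoothing": np.logspace(-9, -1, 5)}
search = GridSearchCV(GaussianNB(), grid, cv=3)
search.fit(X, y)
print(search.best_params_["var_smoothing"])
```

For `MultinomialNB` or `BernoulliNB` the analogous parameter would be the smoothing prior `alpha`, searched the same way.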
1 vote · 0 answers · 49 views

Minimizing overfitting when doing hyperparameter tuning

Generally when using scikit-learn's GridSearchCV (or RandomizedSearchCV), we get the best model with the best validation score even if the model overfits a little. How can we compute the generalization error ...
asked by Amine Benatmane
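One standard answer is nested cross-validation: the tuning happens in an inner loop, and an outer loop scores data that the inner search never saw, giving a nearly unbiased estimate of generalization error. A minimal sketch on toy data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=200, random_state=0)

# Inner loop: hyperparameter tuning.
inner = GridSearchCV(LogisticRegression(max_iter=1000),
                     {"C": [0.1, 1.0]}, cv=3)

# Outer loop: each outer fold is held out from the inner tuning, so
# its score estimates performance on genuinely unseen data.
outer_scores = cross_val_score(inner, X, y, cv=3)
print(round(float(outer_scores.mean()), 3))
```

The simpler alternative is a single held-out test set that is scored exactly once after tuning; nested CV trades extra compute for using all the data in both roles.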
